Engineering Mind Blindness, Part 1

  

 

The Invisible Foundation

Why Your Brain Ignores the Most Important Part of Every Simulation

Part One: The Problem

 

By Joseph McFadden Sr.

Engineering Fellow, Zebra Technologies  |  Professor of Mechanical Engineering, Fairfield University

McFaddenCAE.com

 

 

Part of the Building Intuition Before Equations series

Continued in Part Two: Training the Checking Habit


 

 

Let me start with something that has nothing to do with engineering.

 

Right now, as you read this, your eyes are doing something extraordinary. You think you're seeing the world in high definition — a crisp, continuous panorama of color and detail. But you're not. Not even close. Only about one percent of your visual field, a tiny spot called the fovea, actually sees in high resolution. The other ninety-nine percent? It's a blur. A smear. Like looking through frosted glass.

 

And yet you don't notice. You never notice. Because your brain is constructing a seamless illusion for you — filling in the gaps, predicting what should be there, stitching together a world from scraps of data and a lifetime of expectations.

 

Your brain does this because it has to. It's the most metabolically expensive organ you own — two percent of your body weight consuming twenty percent of your energy, running on roughly twenty watts for every moment you're alive. In children, the brain can consume up to fifty percent of the body's total energy budget. That's an enormous bill. And evolution, which is the most ruthless cost accountant that ever existed, solved this problem the only way it could: by making your brain a prediction machine.

 

Karl Friston at University College London formalized this as the free energy principle. The idea is elegant: your brain doesn't process every bit of incoming sensory data. Instead, it builds models of the world and then only pays full attention when reality violates those predictions. The crash of a glass breaking in a quiet room? Your brain snaps to attention — that's a prediction error. But the hum of the air conditioner, the feel of your shirt against your skin, the peripheral shapes at the edges of your vision? Those get suppressed. Filed away. Treated as noise.

 

This exact same mechanism — the one that makes you a brilliant, energy-efficient survivor in the physical world — is the one that makes you skip past the units in a calculation. And that skipping isn't laziness. It's your ancient brain doing exactly what it evolved to do.

 

The problem is that units aren't background noise. They're the deepest signal your work can send about whether you truly understand the physics.

 

Chapter One

The Signal in the Error

 

I teach fracture mechanics and lab courses at Fairfield University, and I've been doing failure analysis and simulation for over forty-four years. Every semester, I watch something happen in my classroom that I've also seen happen in professional engineering organizations, in aerospace programs, and in simulation groups at major corporations. It looks different at each level, but it's the same phenomenon.

 

A student sets up a calculation. Maybe a stress intensity factor, maybe a beam deflection, maybe a natural frequency estimate. The physics is right. The approach is right. The algebra is clean. And then, at the very end, the answer comes out in the wrong units. Or worse, the answer comes out with no units at all — just a naked number sitting on the page.

 

I don't see a unit error as something to punish. I see it as a signal. A diagnostic. When a student drops the units, their brain is telling me something important: they haven't yet built an internal model where the units are inseparable from the meaning of the quantity.

 

Think about what it means to truly understand that stress is force per unit area. Not to recite it. Not to write σ = F/A on command. But to feel, at an intuitive level, that when you say "two hundred ten megapascals," you are describing two hundred ten million newtons pushing on every square meter of surface. If you carry that understanding in your bones, you cannot drop the units — because the units are the meaning. Saying "the stress is two hundred ten" without the megapascals is like saying "the temperature is seventy-two" without specifying Fahrenheit or Celsius. It's not an answer. It's a fragment of one.

 

This is not about being pedantic. This is about fundamentals. The unit error on a quiz isn't the disease — it's the symptom. The disease is surface-level processing, and our brains are wired for it.

 

Chapter Two

Why We Skim the Surface

 

Remember the fovea — that one percent of your visual field that actually sees in high resolution? Your brain uses the other ninety-nine percent as a kind of forward patrol. The peripheral blur detects shapes, motion, rough patterns. Your amygdala and attention networks evaluate what deserves a closer look. Something moves fast? The fovea snaps to it. A shape resembles a predator? Your whole system mobilizes. But a small change in a familiar pattern? That gets filtered out. It's not worth the metabolic cost.

 

Nilli Lavie at University College London has spent decades studying what she calls load theory. Her research shows that when your brain is operating under high cognitive load — and building a finite element model is certainly high cognitive load — the brain doesn't just deprioritize peripheral information. It actively suppresses it. The visual cortex itself shows reduced activity for anything outside the focus of attention. The eyes see, but the brain does not process.

 

There's a famous demonstration of this: the invisible gorilla experiment by Daniel Simons and Christopher Chabris. Viewers are asked to count basketball passes in a video, and while they're counting, a person in a gorilla suit walks right through the middle of the scene, beats their chest, and walks off. Half the viewers never see the gorilla. The gorilla is fully visible. It's not hidden. It's not subtle. But the brain, loaded with the counting task, literally cannot allocate the resources to perceive it.

 

Now think about what happens when you're building a finite element model. You're managing geometry, mesh settings, contact definitions, material assignments, boundary conditions — cognitive load is high. And what does the brain do with the density value in the material card? It treats it exactly like the gorilla.

 

This is why I've stopped treating unit errors as failures of diligence and started treating them as failures of depth. When a student carries units correctly through a calculation, it tells me they understand the physics well enough that the units are part of their thinking — not an afterthought bolted on at the end. The unit error is a gift, if you know how to read it.

 

Chapter Three

From Quizzes to Cockpits

 

Now let me show you how this same mechanism scales. Because the spectrum of unit failures — from a five-point deduction on a quiz to a catastrophe that makes international headlines — is not a spectrum of different problems. It's the same problem at different altitudes.

 

July 23, 1983. Air Canada Flight 143 takes off from Montreal bound for Edmonton with sixty-one passengers aboard. The aircraft is a brand-new Boeing 767 — one of the first in the fleet calibrated for metric units. The fuel quantity system is malfunctioning, so the ground crew measures fuel manually with a dipstick. They need to convert liters to mass to know how much fuel is on board.

 

The crew multiplies the volume by 1.77 — the density of jet fuel. That number is correct. But it's in pounds per liter, because every other aircraft in the Air Canada fleet uses imperial. The new 767 needs kilograms per liter, which is 0.8. The arithmetic goes through cleanly; it just delivers a fuel load in pounds that everyone reads as kilograms. Nobody catches the mismatch. The flight management computer accepts the number without complaint — just as Abaqus accepts whatever numbers you give it without checking units.

 

The result: the aircraft has less than half the fuel it needs. Over northwestern Ontario, at 41,000 feet, both engines flame out. The plane becomes a glider. Through extraordinary airmanship, the captain — who happened to be a trained glider pilot — deadsticks the 767 onto an abandoned airstrip in Gimli, Manitoba, that was being used as a go-kart track that afternoon. Sixty-one passengers walk away. The plane earns the nickname "the Gimli Glider."

 

The crew was not incompetent. They were experienced professionals operating under cognitive load: a malfunctioning fuel system, a new aircraft type, time pressure, a fleet in transition. Their brains reached for the familiar number — 1.77 — and moved on. No prediction error. No surprise signal. No gorilla.

 

The student who writes "stress = 210" without the megapascals and the crew who enters 1.77 without checking the unit system are making the same cognitive error at different altitudes. The brain decided the detail wasn't worth the energy.

 

Chapter Four

Three Hundred Twenty-Seven Million Dollars

 

September 23, 1999. NASA's Mars Climate Orbiter arrives at Mars after a nine-month journey. It's supposed to pass about 226 kilometers above the surface as it enters orbit. Instead, it comes in at 57 kilometers — deep in the atmosphere, where no spacecraft can survive. It burns up. Three hundred twenty-seven million dollars, gone.

 

The root cause: Lockheed Martin's software calculated thruster impulse in pound-force seconds. NASA's Jet Propulsion Laboratory expected newton-seconds. The conversion factor is 4.45. Every trajectory correction over nine months was off by that factor. The errors accumulated silently, nudging the spacecraft toward destruction.

 

"The problem here was not the error. It was the failure of NASA's systems engineering, and the checks and balances in our processes, to detect the error." — Edward Weiler, NASA Associate Administrator for Space Science

 

The problem was not the error. The problem was that no one detected it. For nine months. Across two organizations. With hundreds of engineers. The pound-force values looked like numbers. They were in the right format, in the right range. They passed the brain's quick pattern check. The gorilla walked through the basketball game for nine months and nobody saw it.

 

Chapter Five

One Point Three Millimeters

 

The Hubble Space Telescope. $1.5 billion. The most sophisticated optical instrument ever built. Launched in April 1990. Six weeks later, the first images came down. They were blurry.

 

The primary mirror had been ground to the wrong shape — too flat near the outer edge by about 2 microns. Two millionths of a meter. The most precisely wrong mirror in the history of optics. The cause: a measuring device had a lens positioned 1.3 millimeters off. Three washers had shifted the reference point just enough to guide the polishing machine to sculpt a perfect mirror to exactly the wrong specification.

 

Here's the part that matters: the error was detected. Twice. A second null corrector clearly showed something was wrong. The technicians dismissed it. They trusted their prediction model — "we have the most perfect mirror ever ground by humans on Earth" — and discounted the contradictory evidence. The brain resolved the conflict by rejecting the surprise rather than investigating it.

 

It took a $700 million shuttle repair mission to install corrective optics — essentially giving the most expensive telescope in history a pair of glasses.

 

This is the prediction machine at its most dangerous: not when it ignores evidence, but when it actively overrides contradictory evidence to protect the existing model.

 

Chapter Six

The Full Spectrum

 

Let me lay out the spectrum explicitly, because seeing it end to end is what makes the point land.

 

At the bottom: a student loses five points for dropping units. I use it as a teaching moment — what does stress physically mean? What would 210 million newtons pushing on a square meter of your desk actually feel like? Once the student can feel the unit, they stop dropping it.

 

One level up: a junior engineer builds a finite element model using vendor material properties. The vendor uses grams, millimeters, and milliseconds. The engineer's model uses tonnes, millimeters, and seconds. Young's modulus is the same — 210,000. Only the density differs: 7.85E-3 versus 7.85E-9. Six orders of magnitude. The model runs. It converges. It produces beautiful contour plots. The mass is wrong by a factor of a million. Nobody catches it because the results look like results.

 

One level up: a design team inherits a simulation model and doesn't verify the unit system. The drop velocity is entered as 4,400 mm/s in a model that expects mm/ms. The simulation said it would pass. The product fails in the field.

 

Higher: the Gimli Glider. Higher still: Tokyo Disneyland, 2003 — Space Mountain derailed because axles were 44.14 mm instead of 45 mm. Less than a millimeter. Structural failure on a ride full of people. At the top: the Mars Climate Orbiter and Hubble.

 

Every single one — from the quiz to Mars — is the same brain making the same energy-conservation decision: "that detail doesn't warrant my attention right now." The magnitude of the consequence is different. The cognitive mechanism is identical.

 

Chapter Seven

Units as DNA

 

I use the word DNA deliberately. DNA encodes the fundamental instructions that determine whether every downstream process produces the correct result. One substitution in three billion base pairs can cause sickle cell disease.

 

That's exactly what unit consistency does in a simulation. Every number in your model — every modulus, every density, every velocity, every force — is meaningless without its unit context. And unlike DNA, which has error-correction mechanisms built into the cellular machinery, Abaqus has no unit system at all. None. Zero. The software enforces nothing. It accepts whatever numbers you give it and solves the math.

 

The results still look like results. You still get stress distributions. You still get displacement fields. The model runs. It converges. It just produces wrong answers with complete confidence.

 

The units aren't labels you attach after the calculation. They're embedded in the meaning of the number from the moment it enters your model. When someone writes a density of 7.85E-9 and knows, without checking a table, that this means tonnes per cubic millimeter, they have internalized the unit system. When they write it and don't know what it represents, they're operating at the surface. And surface-level processing is where the errors live.
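
If you want to see where 7.85E-9 comes from, the conversion from the familiar SI value takes two steps: kilograms to tonnes, and cubic meters to cubic millimeters. A minimal sketch in Python (the variable names are mine, chosen for illustration):

```python
# Steel density, converted from SI into the tonne-mm-s system.
rho_si  = 7850.0                  # kg/m^3, the handbook SI value
rho_tms = rho_si * 1e-3 / 1e9     # kg -> tonne (x 1e-3), m^3 -> mm^3 (/ 1e9)
print(rho_tms)                    # ~7.85e-09 tonne/mm^3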

 

Chapter Eight

The Systems and Their Fingerprints

 

Knowing your unit system cold — having it in your bones, not just in your notes — is the antidote to the brain's natural tendency to skim.

 

In structural simulation, the most common system — and the one I recommend as your default — is tonne, millimeter, second. Stress comes out in megapascals. Force in newtons. Energy in millijoules. Steel has a Young's modulus of 210,000, a density of 7.85E-9, and gravity is 9,810 mm/s².
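
To see that consistency at work, here is a minimal sketch in Python. The one-meter cube of steel is an assumed example, not anything from a real model; the point is that with the values above, force falls out in newtons and stress in megapascals with no conversion factors anywhere.

```python
# Consistency check for the tonne-mm-s unit system:
# mass in tonnes, length in mm, time in s,
# so force = tonne*mm/s^2 = N and stress = N/mm^2 = MPa.

density = 7.85e-9        # steel, tonne/mm^3
gravity = 9810.0         # mm/s^2
side    = 1000.0         # mm -- a one-meter cube of steel (assumed example)

volume = side ** 3               # mm^3
mass   = density * volume        # tonne        -> 7.85 tonnes
weight = mass * gravity          # N            -> about 77,000 N
stress = weight / side ** 2      # N/mm^2 = MPa -> about 0.077 MPa on its base

print(f"mass   = {mass:.2f} tonne")
print(f"weight = {weight:.0f} N")
print(f"stress = {stress:.3f} MPa")
```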

 

That density — 7.85E-9 — is the fingerprint.

 

Density magnitude is your most reliable tool for identifying which unit system a model is using. It's the one number that changes dramatically between systems while the modulus can stay in familiar ranges.

 

Unit System Reference: steel density in each consistent system

    tonne-mm-s      7.85E-9    (tonne/mm³)
    gram-mm-ms      7.85E-3    (g/mm³)
    kg-mm-s         7.85E-6    (kg/mm³)
    SI (kg-m-s)     7,850      (kg/m³)
    slinch-in-s     7.33E-4    (slinch/in³)
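
Because the density is the fingerprint, the check can be made mechanical. Here is a minimal sketch in Python; the function and its one-decade tolerance are my own illustration, not a feature of any solver. It guesses which system a steel-like density most plausibly belongs to by comparing orders of magnitude against the table above.

```python
import math

# Reference steel densities in each consistent unit system (from the table above).
STEEL_DENSITY = {
    "tonne-mm-s":  7.85e-9,
    "gram-mm-ms":  7.85e-3,
    "kg-mm-s":     7.85e-6,
    "SI (kg-m-s)": 7850.0,
    "slinch-in-s": 7.33e-4,
}

def guess_unit_system(density, tolerance_decades=1.0):
    """Guess the unit system of a steel-like density by order of magnitude."""
    best = min(STEEL_DENSITY,
               key=lambda s: abs(math.log10(density) - math.log10(STEEL_DENSITY[s])))
    gap = abs(math.log10(density) - math.log10(STEEL_DENSITY[best]))
    if gap > tolerance_decades:
        return f"no system within {tolerance_decades} decade(s) -- stop and check this value"
    return best

print(guess_unit_system(7.85e-9))   # tonne-mm-s
print(guess_unit_system(7850.0))    # SI (kg-m-s) -- the value an AI tool will often hand you
print(guess_unit_system(0.284))     # flagged: not within a decade of any consistent system
```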

 

In explicit dynamics — drop tests, crash, impact — many vendors use gram, millimeter, millisecond. Steel's Young's modulus stays at 210,000 — the same number you know from tonne-millimeter-second. But the density changes from 7.85E-9 to 7.85E-3. Same modulus. Density off by a factor of a million.

 

This trap catches experienced engineers specifically because the modulus looks correct. The brain checks E, sees 210,000, says "that's right," and moves on. The prediction machine is satisfied. But the density is wrong, the mass is off by a factor of a million, and there is no error message anywhere.

 

The imperial system adds its own trap. The consistent mass unit in inch-pound-second is the slinch — about 175 kilograms. Steel density is 7.33E-4 slinches per cubic inch. But the machinist's handbook value of 0.284 is in pounds-mass per cubic inch — making your model 386 times too heavy. Zero point two eight four looks more reasonable than 7.33E-4. The wrong number passes the prediction check. The right number triggers doubt. Your brain's energy-conservation bias pushes you toward the error.
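
The arithmetic behind that factor of 386 is worth doing once. A minimal sketch, using the handbook value quoted above and standard gravity in inch units (the variable names are mine; the small difference from 7.33E-4 is just rounding in the handbook figure):

```python
# Converting a handbook steel density (pounds-mass per cubic inch) into the
# consistent inch-pound-second mass unit, the slinch (lbf*s^2/in).
g_in_per_s2     = 386.09    # standard gravity in in/s^2 -- the factor of "386"
rho_lbm_per_in3 = 0.284     # machinist's handbook value for steel

rho_slinch_per_in3 = rho_lbm_per_in3 / g_in_per_s2
print(f"{rho_slinch_per_in3:.2e} slinch/in^3")   # ~7.4e-04, the "unreasonable-looking" right answer
```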

 

Chapter Nine

The New Gorilla in the Room

 

Everything described so far has been about the ancient vulnerability — the prediction machine that evolution gave you, optimized for a world where the familiar was almost always safe to skip.

 

There is a new variable. And it is the most powerful amplifier of that vulnerability ever introduced into engineering practice.

 

Artificial intelligence.

 

Let me be precise, because this is not a blanket warning against using these tools. I partner with artificial intelligence every day. The attribution at the close of this essay says it plainly: formatted and expanded with artificial intelligence — not to be told what to write, but to debate and build upon the work. I am not arguing against artificial intelligence. I am arguing for understanding exactly what it does to the prediction machine — so you can use it deliberately rather than be worked by it quietly.

 

Artificial intelligence produces output that is polished, confident, and structurally complete. When it generates a material card, the format is correct. The field labels are right. The numbers are plausible — a steel modulus of 210,000, a Poisson's ratio of 0.3, a yield strength in a reasonable range.

 

And a density.

 

This is where the gorilla walks in.

 

AI is trained predominantly on SI data — kilograms, meters, seconds. When you ask it to generate or assist with a material card without explicitly specifying your unit system, it may give you a density of 7,850. That is the correct density of steel. In SI. In kilograms per cubic meter.

 

If your model is in tonne-millimeter-second, you need 7.85E-9.

 

The difference is twelve orders of magnitude. The AI output looks exactly right — same number format, plausible magnitude, professionally presented. Your prediction machine encounters that material card, classifies it as handled, and moves on.

 

Researchers call this automation bias — the well-documented tendency to over-trust the outputs of systems that are usually right. The more capable the tool, the stronger the bias. The more experienced the engineer, the deeper the prediction model — and the more seamlessly a fluent AI output fits into that model without triggering scrutiny.

 

There is a second dimension worth naming. When you look up steel density in a materials database, you get a number — and if the database is good, you get context: source, testing standard, temperature, confidence. When you ask an artificial intelligence system, you get a number. A confident, fluent, unhedged number. The epistemic uncertainty — the acknowledgment that this depends on alloy, condition, temperature, and that you should verify against your actual material specification — is invisible in the output. The brain reads certainty into confidence. And artificial intelligence is extraordinarily confident.

 

This is not a reason to stop using these tools. It is a reason to understand the risk precisely — so you can design around it.
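
One concrete example of designing around it, and a preview of the kind of habit Part Two develops in full: before trusting any material card, AI-generated or otherwise, reconcile the model's mass against something you can physically verify. A minimal sketch in Python, with a made-up part volume and scale weight standing in for real data:

```python
# Mass reconciliation for a model built in tonne-mm-s:
# density is tonne/mm^3, so density * volume gives mass in tonnes.

volume_mm3       = 1.2e5     # part volume from CAD, mm^3 (made-up example)
measured_mass_kg = 0.94      # what the real part weighs on a scale (made-up example)

density_ai  = 7850.0         # AI-suggested steel density -- the SI value, wrong system here
density_tms = 7.85e-9        # correct tonne-mm-s value

for label, density in [("AI card", density_ai), ("tonne-mm-s card", density_tms)]:
    model_mass_kg = density * volume_mm3 * 1000.0   # tonnes -> kg
    ratio = model_mass_kg / measured_mass_kg
    print(f"{label}: model mass = {model_mass_kg:.3g} kg "
          f"({ratio:.1e} times the part on the scale)")
```

The right card matches the scale to within rounding; the wrong one is off by twelve orders of magnitude, and the check costs a few lines of arithmetic.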

 

That is the work of Part Two.

 

 

 

Continued in Part Two: Training the Checking Habit

 

Part One has established the mechanism — why the prediction machine is built the way it is, how it scales from a five-point quiz deduction to $327 million at Mars, why units are the DNA of your simulation, and why artificial intelligence may be the most powerful fluency generator ever introduced into your workflow.

 

Part Two delivers the toolkit: the four warning signals the prediction machine leaves before it suppresses, three evidence-based practices for training the checking habit, the individual plan for building these habits into your daily workflow, and the holistic close on what all of this means for simulation engineering in an era of increasingly powerful and fluent tools.

 

 

 

Joseph McFadden Sr. is an Engineering Fellow at Zebra Technologies leading the MEAS (Mechanical Engineering Analysis & Services) team, and a Professor of Mechanical Engineering at Fairfield University. He has over 44 years of experience in failure analysis, CAE simulation, materials science, and expert witness work, and was one of three pioneers who brought Moldflow simulation technology to North America. He writes and teaches under the "Holistic Analyst" and "Building Intuition Before Equations" brands, exploring the intersection of engineering simulation, neuroscience, and systems thinking.

 

All thoughts and ideas are the author's own, formatted and expanded with Claude AI — not to be told what to write, but to debate and build upon the work.

 

This essay is part of the FEA Best Practices series. For more content, tools, and the Abaqus INP Analyzer, visit McFaddenCAE.com.
